55 research outputs found
Beyond SumBasic: Task-Focused Summarization with Sentence Simplification and Lexical Expansion
In recent years, there has been increased interest in topic-focused multi-document summarization. In this task, automatic summaries are produced in response to a specific information request, or topic, stated by the user. The system we have designed to accomplish this task comprises four main components: a generic extractive summarization system, a topic-focusing component, sentence simplification, and lexical expansion of topic words. This paper details each of these components, together with experiments designed to quantify their individual contributions. We include an analysis of our results on two large datasets commonly used to evaluate task-focused summarization, the DUC2005 and DUC2006 datasets, using automatic metrics. Additionally, we include an analysis of our results on the DUC2006 task according to human evaluation metrics. In the human evaluation of system summaries compared to human summaries, i.e., the Pyramid method, our system ranked first out of 22 systems in terms of overall mean Pyramid score; and in the human evaluation of summary responsiveness to the topic, our system ranked third out of 35 systems.
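The generic extractive component builds on SumBasic, which scores sentences by the average probability of their words and down-weights words once they are used. A minimal sketch of that frequency-based selection loop (illustrative only; function and variable names are assumptions, not the authors' code):

```python
from collections import Counter

def sumbasic_summary(sentences, max_sentences=2):
    """Greedy SumBasic-style selection: pick sentences whose words have high
    average probability, then square used-word probabilities to reduce redundancy."""
    tokenized = [s.lower().split() for s in sentences]
    counts = Counter(w for toks in tokenized for w in toks)
    total = sum(counts.values())
    prob = {w: c / total for w, c in counts.items()}

    summary = []
    remaining = list(range(len(sentences)))
    while remaining and len(summary) < max_sentences:
        # Score each candidate sentence by its mean word probability.
        best = max(remaining,
                   key=lambda i: sum(prob[w] for w in tokenized[i]) / len(tokenized[i]))
        summary.append(sentences[best])
        remaining.remove(best)
        # Squaring the probability of each used word penalizes repetition.
        for w in tokenized[best]:
            prob[w] = prob[w] ** 2
    return summary
```

The full system layers topic focusing, sentence simplification, and lexical expansion on top of this generic scorer; those components are not shown here.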
A Controllable Model of Grounded Response Generation
Current end-to-end neural conversation models inherently lack the flexibility
to impose semantic control in the response generation process, often resulting
in uninteresting responses. Attempts to boost informativeness alone come at the
expense of factual accuracy, as attested by pretrained language models'
propensity to "hallucinate" facts. While this may be mitigated by access to
background knowledge, there is scant guarantee of relevance and informativeness
in generated responses. We propose a framework that we call controllable
grounded response generation (CGRG), in which lexical control phrases are
either provided by a user or automatically extracted by a control phrase
predictor from dialogue context and grounding knowledge. Quantitative and
qualitative results show that, using this framework, a transformer based model
with a novel inductive attention mechanism, trained on a conversation-like
Reddit dataset, outperforms strong generation baselines.
Comment: AAAI 202
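In CGRG, control phrases can come from a learned predictor over the dialogue context and grounding knowledge. As a toy illustration of that idea (not the paper's predictor, which is learned), one could rank grounding terms that are informative and not already present in the context:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "it", "that", "by"}

def predict_control_phrases(context, grounding, top_k=3):
    """Toy control-phrase predictor: rank grounding words by frequency,
    keeping only content words absent from the dialogue context."""
    ctx_words = set(re.findall(r"[a-z']+", context.lower()))
    ground_words = re.findall(r"[a-z']+", grounding.lower())
    candidates = Counter(w for w in ground_words
                         if w not in STOPWORDS and w not in ctx_words)
    return [w for w, _ in candidates.most_common(top_k)]
```

The actual CGRG framework feeds such phrases, alongside the grounding, into a transformer with inductive attention so that generation attends only to relevant spans.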
A Neural Network Approach to Context-Sensitive Generation of Conversational Responses
We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into account previous dialog utterances. Our dynamic-context generative
models show consistent gains over both context-sensitive and
non-context-sensitive Machine Translation and Information Retrieval baselines.
Comment: A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell, J.-Y. Nie, J. Gao, B. Dolan. 2015. A Neural Network Approach to Context-Sensitive Generation of Conversational Responses. In Proc. of NAACL-HLT. Pages 196-20
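The dynamic-context models described here condition generation on previous utterances by encoding the context and current message together. A minimal sketch of that input preparation, using a summed bag-of-words representation (an assumed simplification; the paper's models use learned embeddings):

```python
def bow_vector(tokens, vocab):
    """Bag-of-words count vector over a fixed vocabulary (unknown words dropped)."""
    index = {w: i for i, w in enumerate(vocab)}
    vec = [0] * len(vocab)
    for t in tokens:
        if t in index:
            vec[index[t]] += 1
    return vec

def context_message_input(context_turns, message, vocab):
    """Assumed dynamic-context conditioning: pool the context turns and the
    current message into one count vector as input to a generative model."""
    toks = [w for turn in context_turns for w in turn.lower().split()]
    toks += message.lower().split()
    return bow_vector(toks, vocab)
```

This pooling is what lets a fixed-size network take variable-length conversation history into account without the sparsity problems of phrase-table-style context features.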